2 research outputs found

    Unsupervised multilingual learning

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 241-254).

    For centuries, scholars have explored the deep links among human languages. In this thesis, we present a class of probabilistic models that exploit these links as a form of naturally occurring supervision. These models allow us to substantially improve performance on core text processing tasks such as morphological segmentation, part-of-speech tagging, and syntactic parsing. Beyond these traditional NLP tasks, we also present a multilingual model for lost-language decipherment. We test this model on the ancient Ugaritic language. Our results show that we can automatically uncover much of the historical relationship between Ugaritic and Biblical Hebrew, a known related language.

    by Benjamin Snyder. Ph.D.

    Multiple aspect ranking for opinion analysis

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 71-74).

    We address the problem of analyzing multiple related opinions in a text. For instance, in a restaurant review such opinions may include food, ambiance, and service. We formulate this task as a multiple aspect ranking problem, where the goal is to produce a set of numerical scores, one for each aspect. We present an algorithm that jointly learns ranking models for individual aspects by modeling the dependencies between assigned ranks. This algorithm guides the prediction of individual rankers by analyzing meta-relations between opinions, such as agreement and contrast. We provide an online training algorithm for our joint model which trains the individual rankers to operate in our framework. We prove that our agreement-based joint model is more expressive than individual ranking models, yet our training algorithm preserves the convergence guarantees of perceptron rankers. Our empirical results further confirm the strength of the model: the algorithm provides significant improvement over individual rankers, a state-of-the-art joint ranking model, and ad hoc methods for incorporating agreement.

    by Benjamin Snyder. S.M.
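    The joint model described in this abstract can be illustrated with a toy sketch: PRank-style perceptron rankers per aspect, plus a perceptron that predicts whether all aspects agree, with joint prediction choosing the rank vector that minimizes total "grief" (how far each ranker's score would have to move, plus a penalty when the rank vector contradicts the agreement model). This is an illustrative simplification of the abstract's description, not the thesis's actual implementation; all class and method names here are invented for the example.

    ```python
    import itertools
    import numpy as np

    class PRanker:
        """Perceptron ranker (PRank): a weight vector plus ordered
        thresholds; the predicted rank is the interval the score falls in."""
        def __init__(self, n_features, n_ranks):
            self.w = np.zeros(n_features)
            self.b = np.arange(n_ranks - 1, dtype=float)  # sorted thresholds

        def score(self, x):
            return float(self.w @ x)

        def predict(self, x):
            # rank r iff b[r-1] <= score < b[r]
            return int(np.searchsorted(self.b, self.score(x), side="right"))

        def update(self, x, y):
            s = self.score(x)
            # desired side of each threshold: +1 below the true rank, -1 at/above
            tau = np.where(np.arange(len(self.b)) < y, 1.0, -1.0)
            violated = (s - self.b) * tau <= 0
            self.w += tau[violated].sum() * x
            self.b -= tau * violated

        def grief(self, x, r):
            """Distance the score would have to move to land in rank r's
            interval (0 if the ranker already agrees with r)."""
            s = self.score(x)
            lo = self.b[r - 1] if r > 0 else -np.inf
            hi = self.b[r] if r < len(self.b) else np.inf
            return max(0.0, lo - s, s - hi)

    class JointRanker:
        """Per-aspect PRankers plus an agreement perceptron; prediction
        minimizes total ranker grief + agreement grief over rank vectors."""
        def __init__(self, n_features, n_ranks, n_aspects):
            self.rankers = [PRanker(n_features, n_ranks) for _ in range(n_aspects)]
            self.a = np.zeros(n_features)  # agreement classifier weights
            self.n_ranks = n_ranks

        def fit(self, X, Y, epochs=20):
            for _ in range(epochs):
                for x, y in zip(X, Y):
                    for ranker, y_a in zip(self.rankers, y):
                        ranker.update(x, y_a)
                    agree = 1.0 if len(set(y)) == 1 else -1.0
                    if agree * (self.a @ x) <= 0:  # standard perceptron update
                        self.a += agree * x

        def predict(self, x):
            s = float(self.a @ x)
            best, best_grief = None, np.inf
            for ranks in itertools.product(range(self.n_ranks),
                                           repeat=len(self.rankers)):
                g = sum(r.grief(x, k) for r, k in zip(self.rankers, ranks))
                # agreement grief: pay |s| when the candidate rank vector
                # contradicts the agreement model's prediction
                if (s > 0) != (len(set(ranks)) == 1):
                    g += abs(s)
                if g < best_grief:
                    best, best_grief = ranks, g
            return best
    ```

    Enumerating all rank vectors is exponential in the number of aspects, so this only works for small problems; it is meant to show the structure of agreement-guided joint prediction, not an efficient decoder.
    
    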